Modules
Embeddings
class eole.modules.transformer_mlp.MLP(model_config, running_config=None)[source]
Bases: Module
A two/three-layer Feed-Forward-Network.
- Parameters:
- model_config – eole.config.models.ModelConfig object
- running_config – TrainingConfig or InferenceConfig derived from RunningConfig
forward(x)[source]
Layer definition.
- Parameters:
- x – (batch_size, input_len, model_dim)
- Returns:
- Output (batch_size, input_len, model_dim)
- Return type: (FloatTensor)
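As an illustrative sketch of this shape contract (not eole's actual implementation — the hidden size, activation, and dropout placement here are assumptions), a position-wise two-layer feed-forward network can be written in plain PyTorch:

```python
import torch
import torch.nn as nn

class PositionwiseFFN(nn.Module):
    """Two-layer feed-forward network applied independently at each position.

    Hypothetical sketch; hidden_dim and the GELU activation are assumptions,
    not values read from eole's ModelConfig.
    """

    def __init__(self, model_dim: int, hidden_dim: int, dropout: float = 0.1):
        super().__init__()
        self.w_1 = nn.Linear(model_dim, hidden_dim)   # expand
        self.w_2 = nn.Linear(hidden_dim, model_dim)   # project back
        self.act = nn.GELU()
        self.dropout = nn.Dropout(dropout)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch_size, input_len, model_dim) -> output has the same shape
        return self.w_2(self.dropout(self.act(self.w_1(x))))

ffn = PositionwiseFFN(model_dim=16, hidden_dim=64)
out = ffn(torch.randn(2, 5, 16))  # (2, 5, 16)
```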
Encoders
class eole.encoders.TransformerEncoder(model_config, running_config=None)[source]
Bases: EncoderBase
The Transformer encoder from “Attention is All You Need”.
- Parameters:
- model_config (eole.config.TransformerEncoderConfig) – full encoder config
- embeddings (eole.modules.Embeddings) – embeddings to use, should have positional encodings
- running_config (TrainingConfig / InferenceConfig)
- Returns:
- enc_out (batch_size, src_len, model_dim)
- encoder final state: None in the case of Transformer
- src_len (batch_size)
- Return type: (torch.FloatTensor, torch.FloatTensor)
forward(emb, mask=None)[source]
See EncoderBase.forward()
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
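The (batch_size, src_len, model_dim) shape contract above can be illustrated with PyTorch's built-in Transformer encoder layers. This is a hypothetical sketch, not eole's encoder; d_model, nhead, and num_layers are arbitrary assumptions:

```python
import torch
import torch.nn as nn

# Stack of standard Transformer encoder layers (batch-first tensors).
layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

src = torch.randn(2, 7, 16)      # (batch_size, src_len, model_dim)
src_len = torch.tensor([7, 5])   # (batch_size,) padded source lengths

# True marks padding positions so attention ignores them.
pad_mask = torch.arange(7).unsqueeze(0) >= src_len.unsqueeze(1)

enc_out = encoder(src, src_key_padding_mask=pad_mask)  # (2, 7, 16)
```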
class eole.encoders.RNNEncoder(model_config, running_config=None)[source]
Bases: EncoderBase
A generic recurrent neural network encoder.
- Parameters:
- model_config (eole.config.ModelConfig)
- running_config (TrainingConfig / InferenceConfig)
forward(emb, mask=None)[source]
See EncoderBase.forward()
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
class eole.encoders.CNNEncoder(model_config, running_config=None)[source]
Bases: EncoderBase
Encoder based on “Convolutional Sequence to Sequence Learning”.
forward(emb, mask=None)[source]
See EncoderBase.forward()
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
class eole.encoders.MeanEncoder(model_config, running_config=None)[source]
Bases: EncoderBase
A trivial non-recurrent encoder. Simply applies mean pooling.
- Parameters:
- model_config (eole.config.ModelConfig)
- embeddings (eole.modules.Embeddings) – embeddings to use, should have positional encodings
- running_config (TrainingConfig / InferenceConfig)
forward(emb, mask=None)[source]
See EncoderBase.forward()
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
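The idea behind a mean encoder — average the token embeddings while ignoring padding — can be sketched in a few lines. This is a hypothetical illustration, not eole's code; the mask convention (True = real token) is an assumption:

```python
import torch

def mean_encode(emb: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Mean-pool embeddings over the time axis, skipping padded positions.

    emb:  (batch, src_len, dim) token embeddings
    mask: (batch, src_len) bool, True for real tokens (assumed convention)
    """
    mask = mask.unsqueeze(-1).float()        # (batch, src_len, 1)
    summed = (emb * mask).sum(dim=1)         # (batch, dim)
    count = mask.sum(dim=1).clamp(min=1.0)   # avoid division by zero
    return summed / count

emb = torch.ones(2, 4, 8)
mask = torch.tensor([[True, True, True, True],
                     [True, True, False, False]])
pooled = mean_encode(emb, mask)              # (2, 8); all ones here
```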
Decoders
class eole.decoders.TransformerDecoder(model_config, running_config=None)[source]
Bases: TransformerDecoderBase
The Transformer decoder from “Attention is All You Need”.
- Parameters:
- model_config (eole.config.TransformerDecoderConfig) – full decoder config
- embeddings (eole.modules.Embeddings) – embeddings to use, should have positional encodings
- running_config (TrainingConfig / InferenceConfig)
forward(emb, **kwargs)[source]
Decode, possibly stepwise.
class eole.decoders.rnn_decoder.RNNDecoderBase(model_config, running_config=None)[source]
Bases: DecoderBase
Base recurrent attention-based decoder class. Specifies the interface used by different decoder types and required by BaseModel.
- Parameters:
- model_config (eole.config.DecoderConfig) – full decoder config
- running_config (TrainingConfig / InferenceConfig)
forward(emb, enc_out, src_len=None, step=None, **kwargs)[source]
- Parameters:
- emb (FloatTensor) – input embeddings (batch, tgt_len, dim)
- enc_out (FloatTensor) – vectors from the encoder (batch, src_len, hidden)
- src_len (LongTensor) – the padded source lengths (batch,)
- Returns:
- dec_outs: output from the decoder (after attn) (batch, tgt_len, hidden)
- attns: distribution over src at each tgt (batch, tgt_len, src_len)
- Return type: (FloatTensor, dict[str, FloatTensor])
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
init_state(**kwargs)[source]
Initialize decoder state with last state of the encoder.
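The dec_outs/attns contract above can be illustrated with a bare global dot-product attention. This is a hypothetical simplification (no learned projections, no RNN state), not eole's decoder:

```python
import torch
import torch.nn.functional as F

def dot_attention(dec_hidden, enc_out, src_len):
    """Global dot-product attention over encoder states.

    dec_hidden: (batch, tgt_len, hidden)
    enc_out:    (batch, src_len, hidden)
    src_len:    (batch,) padded source lengths
    Returns dec_outs (batch, tgt_len, hidden) and attns (batch, tgt_len, src_len).
    """
    scores = dec_hidden @ enc_out.transpose(1, 2)         # (batch, tgt_len, src_len)
    positions = torch.arange(enc_out.size(1))
    pad = positions.unsqueeze(0) >= src_len.unsqueeze(1)  # (batch, src_len)
    scores = scores.masked_fill(pad.unsqueeze(1), float("-inf"))
    attns = F.softmax(scores, dim=-1)                     # distribution over src
    dec_outs = attns @ enc_out                            # attention-weighted context
    return dec_outs, attns

dec_outs, attns = dot_attention(torch.randn(2, 3, 8),
                                torch.randn(2, 5, 8),
                                torch.tensor([5, 4]))
```

Each row of attns is a proper distribution over the (unpadded) source positions, matching the (batch, tgt_len, src_len) shape documented above.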
class eole.decoders.StdRNNDecoder(model_config, running_config=None)[source]
Bases: RNNDecoderBase
Standard fully batched RNN decoder with attention. Faster implementation, using CuDNN. See RNNDecoderBase for options.
Based around the approach from “Neural Machine Translation by Jointly Learning to Align and Translate”.
Implemented without input feeding and currently with no coverage attention.
class eole.decoders.InputFeedRNNDecoder(model_config, running_config=None)[source]
Bases: RNNDecoderBase
Input-feeding-based decoder. See RNNDecoderBase for options.
Based around the input feeding approach from “Effective Approaches to Attention-based Neural Machine Translation”.
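One decode step of the input-feeding idea can be sketched as follows: the previous step's attentional vector is concatenated with the current target embedding before entering the RNN cell. This is a hypothetical sketch of the Luong-style technique, not eole's implementation; the single LSTMCell and the layer sizes are assumptions:

```python
import torch
import torch.nn as nn

class InputFeedStep(nn.Module):
    """One input-feeding decode step (hypothetical sketch)."""

    def __init__(self, emb_dim: int, hidden: int):
        super().__init__()
        # The cell input is the embedding concatenated with the previous
        # attentional vector, hence emb_dim + hidden.
        self.cell = nn.LSTMCell(emb_dim + hidden, hidden)

    def forward(self, emb_t, attn_prev, state):
        # emb_t: (batch, emb_dim); attn_prev: (batch, hidden)
        rnn_in = torch.cat([emb_t, attn_prev], dim=-1)
        h, c = self.cell(rnn_in, state)
        return h, (h, c)

step = InputFeedStep(emb_dim=6, hidden=10)
batch = 2
state = (torch.zeros(batch, 10), torch.zeros(batch, 10))
attn_prev = torch.zeros(batch, 10)
h, state = step(torch.randn(batch, 6), attn_prev, state)  # h: (2, 10)
```

Because each step consumes the previous step's attentional output, the time loop cannot be fully batched over tgt_len, which is why input feeding is slower than the fully batched StdRNNDecoder.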
class eole.decoders.CNNDecoder(model_config, running_config=None)[source]
Bases: DecoderBase
Decoder based on “Convolutional Sequence to Sequence Learning”.
Consists of residual convolutional layers, with ConvMultiStepAttention.
forward(emb, enc_out, step=None, **kwargs)[source]
See eole.decoders.rnn_decoder.RNNDecoderBase.forward()
classmethod from_config(model_config, running_config=None)[source]
Alternate constructor.
init_state(**kwargs)[source]
Init decoder state.
Attention
class eole.modules.structured_attention.MatrixTree(eps=1e-05)[source]
Bases: Module
Implementation of the matrix-tree theorem for computing marginals of non-projective dependency parsing. This attention layer is used in the paper “Learning Structured Text Representations”.
forward(input)[source]
Define the computation performed at every call.
Should be overridden by all subclasses.
NOTE
Although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
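To give a feel for what the matrix-tree theorem computes, here is a hypothetical sketch for the simpler undirected case: edge marginals of the weighted uniform spanning tree distribution, obtained via the graph Laplacian (the marginal of an edge equals its weight times the effective resistance between its endpoints). The paper's MatrixTree layer handles the directed, dependency-parsing case, which this sketch does not cover:

```python
import torch

def spanning_tree_edge_marginals(weights: torch.Tensor) -> torch.Tensor:
    """Edge marginals of the weighted uniform spanning tree distribution.

    Illustrative matrix-tree computation for undirected graphs (assumption:
    weights is a symmetric (n, n) nonnegative matrix with zero diagonal).
    P(edge ij in tree) = w_ij * effective_resistance(i, j).
    """
    degree = torch.diag(weights.sum(dim=1))
    laplacian = degree - weights
    pinv = torch.linalg.pinv(laplacian)       # Moore-Penrose pseudoinverse
    d = torch.diagonal(pinv)
    # Effective resistance: R_ij = pinv_ii + pinv_jj - 2 * pinv_ij
    resistance = d.unsqueeze(0) + d.unsqueeze(1) - 2 * pinv
    return weights * resistance

# Triangle with unit weights: 3 spanning trees, each edge in 2 of them,
# so every edge marginal is 2/3.
w = torch.ones(3, 3) - torch.eye(3)
marg = spanning_tree_edge_marginals(w)
```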